Current Issue: January-March | Volume: 2024 | Issue Number: 1 | Articles: 5
The degree of organised alignment of fibre structures, referred to as the degree of orientation, significantly influences the textural properties and consumer acceptance of fibrous foods. To develop a new method for quantitatively characterising the fibre structure of such foods, we construct a laser transmission imaging system that captures the laser beam spot on a sample; the resulting image then undergoes a series of computer-vision-based processing steps that translate the light and dark variations of the original image into distinct ellipses. The results show that the degree of orientation can be reasonably calculated from the ellipse obtained by fitting the outermost isopixel points. To validate the reliability of the newly developed method, we determine the degree of orientation of typical fibrous foods (extruded beef jerky, pork jerky, chicken jerky, and duck jerky). The ranking of the measured orientations agrees with the results of pseudocolour maps and micrographs, confirming the method's ability to distinguish different fibrous foods. Furthermore, the relatively small coefficients of variation and the strong positive correlation between the degree of organisation and the degree of orientation confirm the reliability of the newly developed method....
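The abstract describes fitting an ellipse to the outermost isopixel points of the laser beam spot and deriving the degree of orientation from that ellipse. The following is a minimal sketch of that idea with OpenCV; the isopixel threshold and the orientation formula (departure of the ellipse from a circle) are illustrative assumptions, not the paper's exact definitions.

```python
# Sketch: fit an ellipse to the outermost isopixel contour of a laser beam
# spot and derive an orientation metric from its axes. The threshold and the
# metric below are assumptions; the original paper's definitions may differ.
import cv2

def degree_of_orientation(image_path, iso_level=30):
    """Estimate a fibre-orientation metric from a laser spot image."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(image_path)

    # Isolate the outermost isopixel region (pixels above the chosen level).
    _, mask = cv2.threshold(img, iso_level, 255, cv2.THRESH_BINARY)

    # Keep the largest contour, i.e. the outer boundary of the beam spot.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    outer = max(contours, key=cv2.contourArea)

    # Least-squares ellipse fit to the outermost isopixel points.
    (cx, cy), (d1, d2), angle = cv2.fitEllipse(outer)
    major, minor = max(d1, d2), min(d1, d2)

    # Assumed metric: 0 for a circular (isotropic) spot, approaching 1 as the
    # spot becomes strongly elongated along the fibre direction.
    orientation = 1.0 - minor / major
    return orientation, angle  # angle is the fitted ellipse rotation in degrees
```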
Monitoring the drinking behavior of animals can provide important information for livestock farming, including the health and well-being of the animals. Measuring drinking time is labor-intensive and, thus, remains a challenge in most livestock production systems. Computer vision technology using a low-cost camera system can help overcome this issue. The aim of this research was to develop a computer vision system for monitoring beef cattle drinking behavior. A data acquisition system, including an RGB camera and an ultrasonic sensor, was developed to record beef cattle drinking actions. We developed an algorithm for tracking the cattle's key body parts, such as the head–ear–neck position, using DeepLabCut, a state-of-the-art deep learning architecture. The extracted key points were analyzed using a long short-term memory (LSTM) model to classify drinking and non-drinking periods. A total of 70 videos were used to train and test the model, and 8 videos were used for validation. During testing, the model achieved 97.35% accuracy. The results of this study will help meet immediate needs and expand farmers' capability to monitor animal health and well-being by identifying drinking behavior....
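The pipeline here is a keypoint tracker (DeepLabCut) followed by an LSTM that labels each clip as drinking or non-drinking. The following is a minimal PyTorch sketch of such a classifier over keypoint sequences; the number of keypoints, sequence length, and layer sizes are illustrative assumptions, not the values used in the study.

```python
# Sketch: classify drinking vs. non-drinking from sequences of head-ear-neck
# keypoints (e.g. exported from DeepLabCut) with an LSTM. Sizes are assumed.
import torch
import torch.nn as nn

class DrinkingLSTM(nn.Module):
    def __init__(self, n_keypoints=3, hidden=64):
        super().__init__()
        # Each frame contributes an (x, y) pair per tracked keypoint.
        self.lstm = nn.LSTM(input_size=n_keypoints * 2,
                            hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # drinking / non-drinking logits

    def forward(self, x):           # x: (batch, frames, n_keypoints * 2)
        _, (h_n, _) = self.lstm(x)  # final hidden state summarises the clip
        return self.head(h_n[-1])

# Example: a batch of 4 clips, 30 frames each, 3 keypoints per frame.
model = DrinkingLSTM()
logits = model(torch.randn(4, 30, 6))
print(logits.shape)  # torch.Size([4, 2])
```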
During the charging and use of electric vehicles, lithium batteries may cause hazards such as fire or even explosion due to thermal runaway. Therefore, a target detection model based on an improved YOLOv5 (You Only Look Once) algorithm is proposed for the features generated by lithium battery combustion. The K-means algorithm is used to cluster and analyse the target locations within the dataset, the residual structure and the number of convolutional kernels in the network are adjusted, and a convolutional block attention module (CBAM) is embedded to improve detection accuracy without affecting detection speed. The experimental results show that the improved algorithm achieves an overall mAP of 94.09%, an average F1 score of 90.00%, and a real-time detection speed of 42.09 FPS (frames per second), which meets typical real-time monitoring requirements. The model can be deployed at electric vehicle charging stations and on production platforms for safety detection, providing a safeguard for the safe production and development of electric vehicles....
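One of the modifications named in the abstract is embedding a CBAM block in the YOLOv5 network. The following is a minimal, self-contained CBAM module in PyTorch for reference; the reduction ratio and spatial kernel size are common defaults, assumed rather than taken from the paper, and this is not the authors' exact implementation.

```python
# Sketch: a minimal CBAM (convolutional block attention module) of the kind
# described as being embedded into YOLOv5. Hyperparameters are assumed defaults.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        # Channel attention: squeeze spatial dims, re-weight each channel.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Spatial attention: learn where to look from pooled channel maps.
        self.spatial = nn.Conv2d(2, 1, spatial_kernel,
                                 padding=spatial_kernel // 2, bias=False)

    def forward(self, x):
        # Channel attention from average- and max-pooled descriptors.
        avg = self.mlp(F.adaptive_avg_pool2d(x, 1))
        mx = self.mlp(F.adaptive_max_pool2d(x, 1))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention from channel-wise average and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.max(dim=1, keepdim=True).values], dim=1)
        return x * torch.sigmoid(self.spatial(s))

# Example: insert after a backbone stage that outputs 128 channels.
feat = torch.randn(1, 128, 40, 40)
print(CBAM(128)(feat).shape)  # torch.Size([1, 128, 40, 40])
```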
To realize an automatic detection system for electrical sensor welds, a method based on the fusion of sensors and nonlinear machine vision is proposed. Targeting the complex scenes and dynamic changes encountered in target recognition and detection in large-scale industrial settings, the system combines a vision sensor with nonlinear machine vision: it introduces nonlinear features and uses a deep neural network to perform multi-scale analysis and recognition of image data on top of traditional machine vision. The system is developed in C++ and provides a usable interface. Photoelectric sensor weld images are collected by machine vision, the target area of each image is detected with a Gaussian model, feature points of the target area are extracted using the Hessian matrix, and the extracted feature points are fed into a quantum gate neural network model to obtain the recognition results. Simulation results show that the proposed method scores highest on all three test indicators, with an accuracy of 97%, a recall of 98%, and an F1 value of 94. The proposed method completes automatic identification of a photoelectric sensor weld within 6 s, compared with 20 s for the film wall recognition method and 22 s for the feature extraction and identification method. These results demonstrate that the fusion of sensors and nonlinear machine vision can support an automatic recognition and detection system for electrical sensor welds. The proposed object detection and recognition method can be applied to dynamic changes and complex scenes against various complex backgrounds and has good application prospects, although the system still has limitations, with room for improvement in computational accuracy and real-time performance....
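The front end of the pipeline described above is a Gaussian model for locating the target area followed by Hessian-based feature point extraction. The following is a rough stand-in for those two steps using OpenCV and scikit-image (determinant-of-Hessian blob detection); the smoothing scale and detector thresholds are illustrative assumptions, and the quantum gate neural network classifier from the paper is not reproduced here.

```python
# Sketch: isolate a candidate weld region with Gaussian smoothing and Otsu
# thresholding, then extract Hessian-based feature points inside that region.
# Parameters are assumptions; this is not the authors' implementation.
import cv2
import numpy as np
from skimage.feature import blob_doh

def weld_feature_points(image_path):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(image_path)

    # Gaussian smoothing followed by Otsu thresholding to isolate the weld area.
    smoothed = cv2.GaussianBlur(img, (9, 9), 0)
    _, region = cv2.threshold(smoothed, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Determinant-of-Hessian blob detection restricted to the candidate region.
    masked = np.where(region > 0, img, 0).astype(float) / 255.0
    blobs = blob_doh(masked, min_sigma=2, max_sigma=15, threshold=0.005)
    return blobs  # rows of (row, col, sigma) for each detected feature point
```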
Our research focused on creating an advanced machine-learning algorithm that accurately detects anomalies in chest X-ray images to provide healthcare professionals with a reliable tool for diagnosing various lung conditions. To achieve this, we analysed a vast collection of X-ray images and utilised sophisticated visual analysis techniques, such as deep learning (DL) algorithms, object recognition, and categorisation models. To create our model, we used a large training dataset of chest X-rays, which provided valuable information for visualising and categorising abnormalities. We also utilised various data augmentation methods, such as scaling, rotation, and imitation, to increase the diversity of images used for training. We adopted the widely used You Only Look Once (YOLO) v8 algorithm, an object recognition paradigm that has demonstrated positive outcomes in computer vision applications, and modified it to classify X-ray images into distinct categories, such as respiratory infections, tuberculosis (TB), and lung nodules. The modified model was particularly effective in identifying unique and crucial findings that may otherwise be difficult to detect using traditional diagnostic methods. Our findings demonstrate that healthcare practitioners can reliably use machine learning (ML) algorithms to diagnose various lung disorders with greater accuracy and efficiency....
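The abstract describes adapting YOLOv8 to classify X-ray images into categories. The Ultralytics framework ships a classification variant of YOLOv8, and the following is a minimal sketch of fine-tuning it on a folder-structured dataset; the dataset path, class layout, epochs, and image size are placeholders, not the study's settings.

```python
# Sketch: fine-tune the Ultralytics YOLOv8 classification model on a
# chest X-ray image-folder dataset. Paths and hyperparameters are assumed.
from ultralytics import YOLO

# Start from the pretrained classification checkpoint.
model = YOLO("yolov8n-cls.pt")

# Expects an image-folder layout such as:
#   chest_xrays/train/<class>/*.png and chest_xrays/val/<class>/*.png
model.train(data="chest_xrays", epochs=50, imgsz=224)

# Classify a held-out image; probs holds the per-class confidence scores.
results = model("sample_xray.png")
probs = results[0].probs
print(results[0].names[probs.top1], float(probs.top1conf))
```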